24 research outputs found

    Efficient algorithms for passive network measurement

    Network monitoring has become a necessity to aid in the management and operation of large networks. Passive network monitoring consists of extracting metrics (or any other information of interest) by analyzing the traffic that traverses one or more network links. Extracting information from a high-speed network link is challenging, given the large data volumes and short packet inter-arrival times. These difficulties can be alleviated by using extremely efficient algorithms or by sampling the incoming traffic. This work improves the state of the art in both approaches. For one-way packet delay measurement, we propose a series of improvements over a recently proposed technique called the Lossy Difference Aggregator. A main limitation of this technique is that it does not provide per-flow measurements. We propose a data structure called the Lossy Difference Sketch that is capable of providing such per-flow delay measurements and, unlike recent related works, does not rely on any model of packet delays. In the problem of collecting measurements under the sliding-window model, we focus on the estimation of the number of active flows and on traffic filtering. Using a common approach, we propose one algorithm for each problem that obtains high accuracy with significant resource savings. In the traffic sampling area, the selection of the sampling rate is a crucial aspect. The most sensible approach involves dynamically adjusting sampling rates according to network traffic conditions, which is known as adaptive sampling. We propose an algorithm called Cuckoo Sampling that can operate with a fixed memory budget and perform adaptive flow-wise packet sampling. It is based on a very simple data structure and is computationally extremely lightweight. The techniques presented in this work are thoroughly evaluated through a combination of theoretical and experimental analysis.
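
    The abstract does not detail how Cuckoo Sampling works, so the following is only a hypothetical sketch of the general idea of adaptive flow-wise packet sampling under a fixed memory budget (in the spirit of adaptive-NetFlow-style approaches, not the authors' algorithm): when the flow table reaches its budget, the sampling probability is halved and existing entries are probabilistically evicted. All names and parameters are illustrative assumptions.

    import random

    class AdaptiveFlowSampler:
        # Hypothetical sketch: flow-wise packet sampling with a fixed
        # memory budget. This is NOT the Cuckoo Sampling algorithm from
        # the thesis; it only illustrates adaptive rate adjustment.
        def __init__(self, max_flows=1024):
            self.max_flows = max_flows   # fixed memory budget (flow entries)
            self.p = 1.0                 # current sampling probability
            self.flows = {}              # flow key -> sampled packet count

        def process(self, flow_key):
            if flow_key in self.flows:       # already-sampled flow: count it
                self.flows[flow_key] += 1
                return True
            if random.random() >= self.p:    # new flow: sample with prob. p
                return False
            if len(self.flows) >= self.max_flows:
                self._degrade()              # table full: lower the rate
            self.flows[flow_key] = 1
            return True

        def _degrade(self):
            # Halve the sampling rate and keep each existing flow with
            # probability 1/2 so memory usage drops accordingly.
            self.p /= 2.0
            self.flows = {k: v for k, v in self.flows.items()
                          if random.random() < 0.5}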

    Network Polygraph: An innovative network visibility service (not only) for NRENs

    We present Network Polygraph (https://polygraph.io), a network monitoring service based on NetFlow/IPFIX that benefits from more than 10 years of research in the fields of network monitoring and traffic classification at UPC BarcelonaTech. The main novelty of this tool is that it can be deployed as a service, either on premises or in the cloud, which makes it much easier and cheaper to deploy while still obtaining classification accuracies similar to those of DPI-based tools. Network Polygraph's technology also overcomes the limitations of previous machine-learning-based algorithms proposed by the research community by integrating a unique, fully automatic retraining system that allows the system to retrain itself, without human intervention, when the traffic characteristics change or the tool is deployed in a new environment. Network Polygraph has been successfully deployed in several Research and Education Networks, including CSUC and RedIRIS, which allowed us to fine-tune the tool for NRENs and add some features specifically tailored to them.

    Validation and improvement of the Lossy Difference Aggregator to measure packet delays

    One-way packet delay is an important network performance metric. Recently, a new data structure called the Lossy Difference Aggregator (LDA) has been proposed to estimate this metric more efficiently than with the classical approaches of sending individual packet timestamps or probe traffic. This work presents an independent validation of the LDA algorithm and provides an improved analysis that results in a 20% increase in the number of packet delay samples collected by the algorithm. We also extend the analysis by relating the number of collected samples to the accuracy of the LDA and provide additional insight on how to parametrize it. Finally, we extend the algorithm to overcome some of its practical limitations and validate our analysis using real network traffic.
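
    As background, the following is a minimal, single-bank sketch of the original LDA that this paper validates, assuming synchronized clocks and an identical hash function at both ends of the link; the multi-bank packet sampling used for loss resilience and the improved analysis contributed by the paper are omitted.

    import zlib

    class LDABank:
        # One LDA bank: each packet is hashed (on invariant content, as
        # bytes) to a bucket accumulating a timestamp sum and a count.
        def __init__(self, num_buckets=64):
            self.b = num_buckets
            self.sums = [0.0] * num_buckets
            self.counts = [0] * num_buckets

        def add(self, packet_id, timestamp):
            i = zlib.crc32(packet_id) % self.b   # same hash at both ends
            self.sums[i] += timestamp
            self.counts[i] += 1

    def average_delay(sender, receiver):
        # Buckets whose sender and receiver counts match lost no packets,
        # so the difference of their timestamp sums yields delay samples.
        delay_sum, samples = 0.0, 0
        for i in range(sender.b):
            if sender.counts[i] == receiver.counts[i] and sender.counts[i] > 0:
                delay_sum += receiver.sums[i] - sender.sums[i]
                samples += sender.counts[i]
        return delay_sum / samples if samples else None

    The estimate is the average one-way delay over the sampled packets; buckets affected by loss are simply discarded, which is what makes the structure resilient without keeping per-packet state.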

    Distributed scheduling in large scale monitoring infrastructures

    Network monitoring is becoming a necessity for network operators, who usually deploy several monitoring applications that aid in tasks such as traffic engineering, capacity planning, and the detection of attacks or other anomalies. There is also an increasing interest in large-scale network monitoring infrastructures that can run multiple applications in several network viewpoints [4].

    Analysis of YouTube User Experience from Passive Measurements

    In this paper, we analyze the YouTube service and the traffic generated by its usage. The purpose of this study is to identify, strictly from passive measurements, information that can serve as metrics or indicators of the progress of individual video sessions, and to estimate the impact of these metrics on the user experience. We present a novel method to track the progress of the video playback that, in contrast to previous works, requires neither instrumentation of the video player nor browser-based plug-ins. Instead, we extract important statistical information about the status of the playback by reverse engineering the metrics in related HTTP requests that are generated during playback. To collect these metrics, we developed a tool that performs YouTube traffic measurements by means of passive network monitoring in a large university campus network. The analysis of the obtained data revealed the most important sources of initial delay in the sessions, as well as buffer outage events and download rate statistics. Further analysis revealed the impact of video advertisements and re-buffering events on the user experience in terms of video abandonment rate.
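
    The paper reverse-engineers the actual request fields, which are not reproduced here; the sketch below only illustrates the general approach of extracting playback-progress metrics from the query parameters of passively observed HTTP requests. The parameter names ("cmt", "bh") are illustrative stand-ins, not the fields identified in the paper.

    from urllib.parse import urlparse, parse_qs

    def extract_playback_metrics(request_url):
        # Pull hypothetical playback-progress hints out of a passively
        # observed HTTP request URL generated by the video player.
        qs = parse_qs(urlparse(request_url).query)
        metrics = {}
        if "cmt" in qs:   # stand-in for a current-media-time parameter
            metrics["playback_position_s"] = float(qs["cmt"][0])
        if "bh" in qs:    # stand-in for a buffered-seconds parameter
            metrics["buffer_health_s"] = float(qs["bh"][0])
        return metrics

    Tracking such values across the requests of a session would reveal stalls (the playback position stops advancing) and re-buffering (the buffered seconds drop to zero) without touching the player itself.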

    A lightweight algorithm for traffic filtering over sliding windows

    The problem of testing whether a packet belongs to a set of filtered addresses has traditionally been addressed using Bloom filters. They have a small memory footprint and require few memory accesses per query and insertion, while presenting a small probability of false positives. The problem of automatically evicting filtered addresses after a pre-configured time window is more challenging, since it requires tracking insertion times for later removal. This has been achieved in the literature by replacing the Bloom filter's vector of bits with a vector of timestamps. This approach precisely expires old items from the filter, but has a large memory footprint. We present a novel Bloom-filter-based data structure that features approximate information expiration. This small extra source of error allows for a more compact filter representation, which is thus more suitable to fit in more expensive, faster memory. Additionally, our data structure is more flexible in that it allows balancing the trade-off between filtering and expiration accuracy. Our experiments show that this method can obtain overall accuracy up to orders of magnitude higher than the timestamp approach using the same amount of memory.
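
    For reference, here is a minimal sketch of the timestamp-vector baseline described above, assuming bytes keys and CRC32-based hashing; the paper's more compact approximate-expiration structure is not reproduced. Insertion stamps the current time at each hash position, and a query succeeds only if every position was written within the window.

    import time, zlib

    class TimestampBloomFilter:
        # Baseline sliding-window filter: the Bloom filter's bit vector
        # is replaced by a vector of timestamps so items expire exactly
        # after `window` seconds (at the cost of a large footprint).
        def __init__(self, size=1 << 16, num_hashes=4, window=300.0):
            self.slots = [0.0] * size
            self.size = size
            self.k = num_hashes
            self.window = window

        def _positions(self, item):
            for seed in range(self.k):
                yield zlib.crc32(item + bytes([seed])) % self.size

        def insert(self, item, now=None):
            now = time.time() if now is None else now
            for i in self._positions(item):
                self.slots[i] = now   # refresh timestamp at each position

        def query(self, item, now=None):
            now = time.time() if now is None else now
            return all(now - self.slots[i] <= self.window
                       for i in self._positions(item))

    Each slot stores a full timestamp instead of a single bit, which is exactly the memory overhead the proposed structure trades away by expiring items only approximately.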